
Super Resolution



Test-Time Anchoring for Discrete Diffusion Posterior Sampling

Rout, Litu, Lugmayr, Andreas, Jafarian, Yasamin, Varadharajan, Srivatsan, Caramanis, Constantine, Shakkottai, Sanjay, Kemelmacher-Shlizerman, Ira

arXiv.org Machine Learning

We study the problem of posterior sampling using pretrained discrete diffusion foundation models, aiming to recover images from noisy measurements without retraining task-specific models. While diffusion models have achieved remarkable success in generative modeling, most advances rely on continuous Gaussian diffusion. In contrast, discrete diffusion offers a unified framework for jointly modeling categorical data such as text and images. Beyond unification, discrete diffusion provides faster inference, finer control, and principled training-free Bayesian inference, making it particularly well-suited for posterior sampling. However, existing approaches to discrete diffusion posterior sampling face severe challenges: derivative-free guidance yields sparse signals, continuous relaxations limit applicability, and split Gibbs samplers suffer from the curse of dimensionality. To overcome these limitations, we introduce Anchored Posterior Sampling (APS) for masked diffusion foundation models, built on two key innovations -- quantized expectation for gradient-like guidance in discrete embedding space, and anchored remasking for adaptive decoding. Our approach achieves state-of-the-art performance among discrete diffusion samplers across linear and nonlinear inverse problems on the standard benchmarks. We further demonstrate the benefits of our approach in training-free stylization and text-guided editing.





A Details of the objective

Neural Information Processing Systems

In our main paper, we describe our methods based on the "Variance Exploding" hyperparameters; therefore, we can use the "Variance Preserving" hyperparameters as well. Note that although the inference algorithms are shown to be equivalent, the choice between "Variance Preserving" and "Variance Exploding" may affect the training of diffusion networks. The proof uses a basic property of Gaussian marginals (see [4] for the complete version). In denoising, the corrupted image is the original image with additive white Gaussian noise. Equation 23 is the SVD of H. The grayscale image is obtained by averaging the red, green, and blue channels of each pixel.
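The degradation operators mentioned above (additive white Gaussian noise for denoising, channel averaging for grayscale) can be illustrated with a minimal NumPy sketch; the image size, noise level, and random seed below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RGB image with values in [0, 1], shape (H, W, 3).
x = rng.random((8, 8, 3))

# Denoising: the measurement is the image plus additive white Gaussian noise.
sigma = 0.1  # illustrative noise level
y_noisy = x + sigma * rng.standard_normal(x.shape)

# Grayscale measurement: average the red, green, and blue channels per pixel.
y_gray = x.mean(axis=-1)

print(y_noisy.shape)  # (8, 8, 3)
print(y_gray.shape)   # (8, 8)
```

For a linear operator H (e.g. blurring or downsampling), the analogous measurement would be `y = H @ x_flat + noise`, and the SVD referred to as Equation 23 factors it as `H = U @ np.diag(s) @ Vt`.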



I'm in love with an ultra-specific Windows Copilot feature

PCWorld

I don't use a Windows Copilot PC as a daily driver, though I have several in my office. But there's one absolutely critical Copilot feature that forces me to swap out my current laptop, attach a Copilot PC to my docking station, and boot it up. Very few people have bought a Copilot PC in the last year. So these features, which are currently locked to Copilot PCs and their NPU, aren't well known: Windows Recall; Paint's Cocreator, Generative Erase, Object Select, and Sticker Generator; Click-to-Do; Photos' Super Resolution, Relight and Restyle Image; the intelligent search features within the Settings menu; Windows Studio Effects; and Live Captions. My editor assumed I would prefer the last feature, Live Captions, probably because it's both useful and cool.


Exploiting the Exact Denoising Posterior Score in Training-Free Guidance of Diffusion Models

Bellchambers, Gregory

arXiv.org Machine Learning

The success of diffusion models has driven interest in performing conditional sampling via training-free guidance of the denoising process to solve image restoration and other inverse problems. A popular class of methods, based on Diffusion Posterior Sampling (DPS), attempts to approximate the intractable posterior score function directly. In this work, we present a novel expression for the exact posterior score for purely denoising tasks that is tractable in terms of the unconditional score function. We leverage this result to analyze the time-dependent error in the DPS score for denoising tasks and compute step sizes on the fly to minimize the error at each time step. We demonstrate that these step sizes are transferable to related inverse problems such as colorization, random inpainting, and super resolution. Despite its simplicity, this approach is competitive with state-of-the-art techniques and enables sampling with fewer time steps than DPS.
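The DPS-style update the abstract refers to can be sketched as follows. This is a toy illustration, not the paper's algorithm: the prior is a standard Gaussian, for which the unconditional score and the Tweedie denoiser have closed forms, and the step size is a fixed placeholder rather than one computed on the fly.

```python
import numpy as np

def score_uncond(x_t):
    # Score of a standard Gaussian prior: grad log p(x) = -x.
    return -x_t

def denoise(x_t, sigma_t):
    # Posterior mean E[x0 | x_t] via Tweedie's formula under the toy prior.
    return x_t + sigma_t**2 * score_uncond(x_t)

def dps_step(x_t, y, sigma_t, sigma_y, step_size):
    # DPS approximates the posterior score by adding a likelihood-gradient
    # term evaluated at the denoised estimate x0_hat.
    x0_hat = denoise(x_t, sigma_t)
    # For a pure denoising task (H = I), grad log p(y | x0_hat) pulls x0_hat
    # toward y; the (1 - sigma_t**2) factor is the Jacobian of the linear
    # toy denoiser above.
    grad_lik = (1 - sigma_t**2) * (y - x0_hat) / sigma_y**2
    return x_t + step_size * (score_uncond(x_t) + grad_lik)

rng = np.random.default_rng(0)
x_t = rng.standard_normal(4)
y = rng.standard_normal(4)
x_next = dps_step(x_t, y, sigma_t=0.5, sigma_y=0.1, step_size=0.01)
print(x_next.shape)  # (4,)
```

The paper's contribution, in these terms, is replacing the fixed `step_size` with a per-timestep value chosen to minimize the error between the approximate and exact posterior scores.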


You can now use AI in Teams to improve poor quality video calls

PCWorld

Tech giant Microsoft has announced that you can now try out the Super Resolution feature in the Teams app, but only if you have a Copilot PC with a Snapdragon X chip. Super Resolution uses AI to improve the resolution of video calls in Teams, even if you have a poor internet connection that forces you to stream resolutions as low as 360p. Using the AI capabilities of Copilot PCs, Super Resolution can artificially scale up the resolution of a video stream without compromising the overall picture quality. To avoid draining your laptop battery -- because using AI to upscale video calls can be demanding -- the default setting is that Super Resolution can only be used when you're plugged into power. Want to give it a whirl?